

Exploring and Unleashing the Power of Large Language Models in CI/CD Configuration Translation

Wang, Chong, Zhang, Chen, Wu, Jiajun, Guo, Wunan, Qu, Jianfeng, Tian, Yewen, Liu, Yang

arXiv.org Artificial Intelligence

Continuous Integration (CI) is a cornerstone of modern collaborative software development, and numerous CI platforms are available. Differences in maintenance overhead, reliability, and integration depth with code-hosting platforms make migration between CI platforms a common practice. A central step in migration is translating CI configurations, which is challenging due to the intrinsic complexity of CI configurations and the need to understand semantic differences and relationships across CI platforms. With the advent of large language models (LLMs), recent advances in software engineering highlight their potential for CI configuration translation. In this paper, we present a study on LLM-based CI configuration translation, focusing on the migration from Travis CI to GitHub Actions. First, using 811 migration records, we quantify the effort involved and find that developers read an average of 38 lines of Travis configuration and write 58 lines of GitHub Actions configuration, with nearly half of the migrations requiring multiple commits. We further analyze translations produced by each of four LLMs and identify 1,121 issues grouped into four categories: logic inconsistencies (38%), platform discrepancies (32%), environment errors (25%), and syntax errors (5%). Finally, we evaluate three enhancement strategies and show that combining guideline-based prompting with iterative refinement achieves the best performance, reaching a Build Success Rate of 75.5%, nearly a threefold improvement over GPT-4o with a basic prompt.
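To make the translation task concrete, here is an illustrative (not from the paper's dataset) minimal Travis CI configuration and one plausible GitHub Actions equivalent:

```yaml
# .travis.yml (source configuration)
language: python
python: "3.9"
install: pip install -r requirements.txt
script: pytest
```

```yaml
# .github/workflows/ci.yml (one possible translation)
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.9"
      - run: pip install -r requirements.txt
      - run: pytest
```

Even this tiny pair hints at the semantic differences the paper discusses: Travis checks out the repository implicitly, while GitHub Actions requires an explicit checkout step, a typical source of the "platform discrepancy" issues identified in the study.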


A Serverless Architecture for Real-Time Stock Analysis using Large Language Models: An Iterative Development and Debugging Case Study

Ashraf, Taniv

arXiv.org Artificial Intelligence

The advent of powerful, accessible Large Language Models (LLMs) like Google's Gemini presents new opportunities for democratizing financial data analysis. This paper documents the design, implementation, and iterative debugging of a novel, serverless system for real-time stock analysis. The system leverages the Gemini API for qualitative assessment, automates data ingestion and processing via GitHub Actions, and presents the findings through a decoupled, static frontend. We detail the architectural evolution of the system, from initial concepts to a robust, event-driven pipeline, highlighting the practical challenges encountered during deployment. A significant portion of this paper is dedicated to a case study on the debugging process, covering common software errors, platform-specific permission issues, and rare, environment-level platform bugs. The final architecture operates at a near-zero cost, demonstrating a viable model for individuals to build sophisticated AI-powered financial tools. The operational application is publicly accessible, and the complete source code is available for review. We conclude by discussing the role of LLMs in financial analysis, the importance of robust debugging methodologies, and the emerging paradigm of human-AI collaboration in software development.
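The paper describes automating data ingestion and processing with GitHub Actions; a scheduled workflow along these lines (file names, the script, and the secret name are illustrative assumptions, not the authors' actual pipeline) might look like:

```yaml
# .github/workflows/analyze.yml -- illustrative sketch
name: Stock analysis
on:
  schedule:
    - cron: "0 13 * * 1-5"    # weekdays, before US market open (UTC)
  workflow_dispatch: {}        # allow manual runs
permissions:
  contents: write              # needed to commit results back to the repo
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: python analyze.py   # hypothetical script: fetch quotes, call the Gemini API
        env:
          GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
      - run: |
          git config user.name github-actions
          git config user.email github-actions@github.com
          git add data/ && git commit -m "Update analysis" && git push
```

The explicit `permissions` block is worth noting: missing workflow permissions are exactly the kind of platform-specific issue the paper's debugging case study covers.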


OUXT Polaris: Autonomous Navigation System for the 2022 Maritime RobotX Challenge

Okamoto, Kenta, Nagata, Akihisa, Arai, Kyoma, Nagao, Yusei, Nishimura, Tatsuki, Hirogaki, Kento, Tanaka, Shunya, Kobayashi, Masato, Sanada, Tatsuya, Kataoka, Masaya

arXiv.org Artificial Intelligence

OUXT-Polaris has been developing an autonomous navigation system through participation in the Maritime RobotX Challenge in 2014, 2016, and 2018. In this paper, we describe improvements to the previous vessel system and indicate the advantages of the improved design. We also describe our development method under COVID-19, using simulation and miniature-size hardware, and the components planned for the next RobotX Challenge.


Deploy Flask app using docker and GitHub actions on Heroku - Dragon Forest

#artificialintelligence

After creating a machine learning model, we can use the Flask framework to build an API for web applications. Here I will show you how to deploy the Flask app to Heroku using Docker and GitHub Actions. With Docker and GitHub Actions, you can create a CI/CD (continuous integration and continuous deployment) pipeline for your machine learning project: GitHub Actions drives the pipeline, and Docker packages the app for deployment.
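A minimal Dockerfile for such a Flask app might look like this (the file names, entrypoint module, and gunicorn server are illustrative assumptions; Heroku injects the listening port via the `$PORT` environment variable):

```dockerfile
# Dockerfile -- illustrative sketch for a Flask ML API on Heroku
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Heroku sets $PORT at runtime; bind the server to it
CMD gunicorn --bind 0.0.0.0:$PORT app:app
```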


Automate Model Deployment with GitHub Actions and AWS

#artificialintelligence

This article was published as a part of the Data Science Blogathon. In a typical software development process, deployment comes at the end of the software development life cycle. First, you build the software, test it for possible faults, and finally deploy it to make it accessible to end users. The same applies to machine learning. In a previous article, I described how we could build a model, wrap it with a REST API, containerize it, and finally deploy it on cloud services.
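The "wrap it with a REST API" step can be sketched with nothing but the Python standard library (the article presumably uses a full web framework; this toy version with a hard-coded linear model is only illustrative):

```python
# Minimal sketch: serve model predictions over HTTP using only the
# standard library. The "model" is a hard-coded linear fit y = 2x + 1.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

SLOPE, INTERCEPT = 2.0, 1.0  # parameters a real pipeline would learn offline

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body, e.g. {"x": 3}
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        pred = SLOPE * payload["x"] + INTERCEPT
        body = json.dumps({"prediction": pred}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

def serve(port):
    """Start the prediction server on a background thread."""
    server = HTTPServer(("127.0.0.1", port), PredictHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

From here, containerizing means copying this script into an image and exposing the port; the CI workflow then builds and pushes that image on every change.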


A Preliminary Investigation of MLOps Practices in GitHub

Calefato, Fabio, Lanubile, Filippo, Quaranta, Luigi

arXiv.org Artificial Intelligence

Background. The rapid and growing popularity of machine learning (ML) applications has led to an increasing interest in MLOps, that is, the practice of continuous integration and deployment (CI/CD) of ML-enabled systems. Aims. Since changes may affect not only the code but also the ML model parameters and the data themselves, the automation of traditional CI/CD needs to be extended to manage model retraining in production. Method. In this paper, we present an initial investigation of the MLOps practices implemented in a set of ML-enabled systems retrieved from GitHub, focusing on GitHub Actions and CML, two solutions to automate the development workflow. Results. Our preliminary results suggest that the adoption of MLOps workflows in open-source GitHub projects is currently rather limited. Conclusions. Issues are also identified, which can guide future research work.
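For readers unfamiliar with CML, a typical CML-style workflow (illustrative; the command names follow CML's documented CLI, but the training script and report contents are assumptions) posts training metrics back to the commit or pull request:

```yaml
# .github/workflows/cml.yml -- illustrative sketch
name: train-report
on: [push]
jobs:
  train:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: iterative/setup-cml@v1
      - run: pip install -r requirements.txt
      - run: python train.py > metrics.txt   # hypothetical training script
      - run: |
          echo "## Training metrics" > report.md
          cat metrics.txt >> report.md
          cml comment create report.md
        env:
          REPO_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```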


GitHub - Nneji123/Serving-Machine-Learning-Models

#artificialintelligence

This repository contains instructions, template source code, and examples on how to serve and deploy machine learning models using various frameworks and applications such as Docker, Flask, FastAPI, BentoML, Streamlit, and MLflow, and even code on how to deploy your machine learning model as an Android app. The repository also covers deploying your apps to various cloud platforms (AWS, Heroku, Vercel, etc.), working with GitHub Actions for CI/CD (continuous integration and continuous delivery), test-driven development (TDD) with pytest, and other useful information. Before we get into building and deploying our models, we'll have to set up our environment. I use 'pyenv' for managing different versions of Python and pyenv-virtualenv for setting up virtual environments. You can learn how to install pyenv on your operating system from its official GitHub repository.


Manage ML Automation Workflow with DagsHub, GitHub Action, and CML

#artificialintelligence

Originally published on Towards AI, the world's leading AI and technology news and media company.


rOpenSci News Digest, February 2022

#artificialintelligence

You can read this post on our blog. Now let's dive into the activity at and around rOpenSci! Consult our Events page to find your local time and how to join. Find out about more events. Maëlle Salmon (Research Software Engineer with rOpenSci) and Karthik Ram (rOpenSci executive director) authored a commentary "The R Developer Community Does Have a Strong Software Engineering Culture" in the latest issue of The R Journal edited by Di Cook, as a response to the discussion paper "Software Engineering and R Programming: A Call for Research" by Melina Vidoni (who's an Associate editor of rOpenSci Software Peer Review).


5 ways machine learning uses CI/CD in production

#artificialintelligence

Continuous integration (CI) is the practice of all software developers merging their code changes into a central repository many times throughout the day. A fully automated software release process is called continuous delivery (CD). Although the two terms are not interchangeable, together they form the CI/CD methodology within DevOps. A continuous integration/continuous delivery (CI/CD) pipeline is a system that automates the software delivery process: it builds the code, runs tests, and releases new product versions whenever the software changes.
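The build-test-deliver stages described above map directly onto a CI workflow; this sketch (repository layout and scripts are illustrative assumptions) covers the build and test stages for an ML project:

```yaml
# .github/workflows/pipeline.yml -- illustrative sketch
name: ml-pipeline
on: [push]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install -r requirements.txt
      - run: pytest                  # test stage: unit and data-validation tests
      - run: python train.py         # hypothetical retraining step
```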